Total Lexicalism and GASGrammars: A Direct Way to Semantics
Abstract
A new sort of generative grammar (Sec. 2) will be demonstrated which is more radically "lexicalist" than any earlier one (Sec. 1). It is a modified Unification Categorial Grammar [1-4] from which even the principal syntactic "weapon" of CGs, Function Application, has been omitted. What remains is the lexical sign and the mere technique of unification as the engine of combining signs. The computation thus requires no usual linguistic technique (e.g. Move, Merge, traces [5], Function Application [6]), which promises a straightforward implementation of GASG in Prolog. Our parser decides whether a Hungarian sentence is grammatical and creates its (practically English) DRS (Sec. 3).

1 DRT, UCG and Total Lexicalism

A "totally lexicalist" generative grammar will be demonstrated in this paper. The first motivation for the enterprise was the stubborn problem of compositionality in DRT (Discourse Representation Theory; e.g. [7], [4]). DRT is a successful attempt to extend sentence-level Montagovian model-theoretic semantics to the discourse level. Its crucial proposal is that a level of discourse representation must be inserted between the language to be interpreted and the world model serving as the context of interpretation. The insertion of this level, however, has given rise to a double problem of compositionality (language → DRS, DRS → world model), at least according to the very strict sense of the Fregean principle of compositionality introduced by Montague [8]. As for the DRS → world model transition, Zeevat [2] has provided a compositional solution, which could successfully be built into the new version of DRT [4]. As for the language → DRS transition, however, the authors admit (p. 195) that no (properly) compositional solution has been found in the last two decades.
The failure to elaborate a properly compositional solution to the language → DRS transition arises from the fundamental incompatibility of the strictly hierarchically organized generative syntactic phrase structures (PS; e.g. [9], [5]) with the basically unordered DRSs. Nowadays [2], [4] some kind of Categorial Grammar (CG) is held to promise the best chance of capturing the language → DRS transition in a properly compositional manner. The reason lies in the fact that, in a CG system, language-specific information (about how words can combine to form constituents, and then sentences), stored in PS rules in the transformational generative theory, is stored in the Lexicon; the reduced syntax only "concatenates": it permits words with compatible lexical information to combine (this operation of concatenation is referred to as Function Application). The problem with Classical CG is that it has only context-free generative capacity, which is held to be insufficient for the description of human languages. There seem to be two ways to increase the generative capacity of classical CG: to admit, in opposition to the original goals, a few combinatorial means, or to introduce the technique of unification applied e.g. in Prolog (UCG). In the spirit of what has been said so far, it is straightforward that DRT is (more) compatible with UCG, which insists on a reduced syntax. UCG is a monostratal grammar based on the formalized notion of the Saussurean sign: a structure that collects a number of levels of linguistic description and expresses relations between the levels by sharing variables in the description of the level information [3: p. 145]. The set of well-formed expressions is defined by specifying a number of such signs in the lexicon and by closing them under rule applications (i.e. the selected lexical signs can be combined to form sentences via a finite number of rule applications).
In monostratal grammars the syntactic and semantic operations are just aspects of the same operation. A prime example of such grammars, besides UCG, is HPSG. The basic problem with UCG, which amounts to the starting point of GASG, lies in the fact that syntax, deprived of the information concerning sentence cohesion in favor of the unification mechanism and reduced to the primitive task of combining adjacent words, will produce linguistically irrelevant constituents. According to Karttunen's [1: p. 19] remark on UCG trees, they look like PS trees but they are only "analysis trees"; and he adds that "all that matters is the resulting [morphological] feature set." Let us take this latter remark on trees and feature sets seriously: adjacency of words is to be registered in the course of analysis exclusively and precisely in the linguistically significant cases. The corresponding technique is to be based on an approach where adjacency and order among words are treated, instead of by the usual categorial apparatus, by the same technique of unification as morphological cohesion. And what will then be the "engine" combining words to form sentences (since in CGs the lexical features of words only serve as filters to avoid inappropriate combinations)? There is no need for a separate engine at all! The engine must be unification itself, the very mechanism that drives Prolog programs. The rich description of a lexical sign serves a double purpose. On the one hand, it characterizes the potential environment of the given sign in possible grammatical sentences, in order for the sign to find the morphologically (or in other ways) compatible elements and to avoid the incompatible ones in the course of forming a sentence. On the other hand, the lexical description characterizes the sign itself, in order for other words to find (or not to find) it, on the basis of similar "environmental descriptions" belonging to the lexical characterizations of these other words.
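As a minimal sketch of this idea (ours, not the paper's implementation; the feature names and the tiny toy signs are invented for illustration), unification alone can decide whether two lexical signs may combine: shared features are matched, variables get bound, and a clash of constants makes the combination fail.

```python
# A Prolog-like unification sketch: terms are strings; names starting
# with an uppercase letter are variables, everything else is a constant.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to the term's current value."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Return an extended substitution, or None on a clash."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return {**subst, t1: t2}
    if is_var(t2):
        return {**subst, t2: t1}
    return None  # two distinct constants: e.g. a morphological clash

def combine(sign1, sign2):
    """Unify the features shared by two signs; None means 'incompatible'."""
    subst = {}
    for feat in sign1.keys() & sign2.keys():
        subst = unify(sign1[feat], sign2[feat], subst)
        if subst is None:
            return None
    return subst

# An adjective leaves its number underspecified (variable "Num");
# unifying with a singular noun binds it, with no extra "engine" needed:
adjective = {"num": "Num", "decl": "front"}
noun = {"num": "sg", "decl": "front"}
print(combine(adjective, noun))      # {'Num': 'sg'}
print(combine({"num": "pl"}, noun))  # None: number clash
```

Binding a variable in one feature is automatically visible wherever that variable is shared, which is how UCG-style signs propagate agreement information.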
And while the selected words are finding each other on the basis of their formal features suitable for unification, their semantic features are also being unified simultaneously; so by the end of a successful derivation it will have been verified that a particular sequence of fully inflected words constitutes a grammatical sentence, and its semantic representation, a DRS, will also be at our disposal. Sec. 2 provides the system of definitions of generative argument structure grammars, and in the last section our parser is sketched.

2 Definition System of GASGrammars

First of all, we provide the abstract definition of language, which is similar to the one in [6]. Different alphabets (e.g. that of sounds and intonational symbols) can be considered, however, depending on the task, and the definition of the phonological model is ambitious: it is the (morpho-)phonologist's task to collect both the relevant set of morpheme segments and the relations among them.

[Def1:
1.1. Let A be a finite set: the alphabet. Let # and "." be special symbols which are not members of A: the space symbol and the full stop. Suppose that, together with other symbols, they constitute a set Y, that of auxiliary symbols. A member s of (A ∪ Y)* is called a sentence if at least one of its members is an element of A, (s)1 ≠ #, the last member of s is the full stop (and there are no further full stops in the list), and (s)i = # = (s)i+1 for no i.
1.2. An element of A* is the i-th word of a sentence s if it is the affix of s between the (i-1)-th and the i-th symbol #; the first word is the prefix of s before the first #, and if the last # is the j-th, the suffix of s after it (and before the full stop) is the (j+1)-th, or last, word.
1.3. We call a subset L of (A ∪ Y)* a language (over alphabet A) if all of its members are sentences.
1.4. We call Phon = 〈Mors, Rel〉 a phonological model (over alphabet A) if Mors is a subset of A*, called a set of morpheme segments, and Rel is a set of relations on Mors.]
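Def1 is concrete enough to be executed directly. The following sketch (our illustration; the toy alphabet and the Hungarian-like example sentence are invented) checks the sentence conditions of Def1.1 and extracts the words of Def1.2:

```python
# '#' is the space symbol, '.' the full stop; A is a toy alphabet.
A = set("abcdefghijklmnopqrstuvwxyzáéíóúöőüű")

def is_sentence(s):
    """Def1.1: s contains a symbol of A, does not start with '#',
    ends with its only full stop, and has no adjacent '#' symbols."""
    return (any(c in A for c in s)
            and not s.startswith("#")
            and s.endswith(".")
            and s.count(".") == 1
            and "##" not in s)

def words(s):
    """Def1.2: the words are the stretches between '#' symbols
    (with the full stop stripped from the last one)."""
    assert is_sentence(s)
    return s[:-1].split("#")

print(is_sentence("a#medve#alszik."))  # True ('the bear sleeps')
print(words("a#medve#alszik."))        # ['a', 'medve', 'alszik']
```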
Numbering will prove to be a crucial question because corresponding elements of intricately related, huge representations have to be referred to.

Footnote 1: In Hungarian, for instance, but not in English, the following relations necessarily belong to Rel: [the morpheme segment in question consists of a single vowel which is j, or it is empty], [... is a, á, e or é]. We assume that the 3sg possessive morpheme consists of two such segments (and a third empty one): e.g. szárny-a-ként 'wing-poss3sg-as' (as its wing), szárny-á-t 'wing-poss3sg-ACC,' fej-e-ként 'head-poss3sg-as,' fej-é-t 'head-poss3sg-ACC,' part-ja-ként 'beach-poss3sg-as,' part-já-t 'beach-poss3sg-ACC,' medvé-je-ként 'bear-poss3sg-as,' medvé-jé-t 'bear-poss3sg-ACC.'

[Def2:
2.1. Let s be a sentence of a language L over an alphabet A. We call an element n of (N³)* a (three-dimensional) numbering if (n)1 = 〈1, 1, 1〉, [if (n)m = 〈i, j, k〉, then either the first projection of (n)m+1 is i or (n)m+1 = 〈i+1, 1, 1〉], and [for each number i in the first projection, the set of second elements consists of the natural numbers from 1 to a maximal value p, and for each pair 〈i, j〉 there are exactly the following three members in the numbering: 〈i, j, 1〉, 〈i, j, 2〉 and 〈i, j, 3〉, necessarily in this order (but not necessarily next to each other)].
2.2. An element mos of (N³ × A*)* is a morphological segmentation of s if [the [1, 2, 3]-projection of mos is a numbering (the numbering of mos)], [it is excluded in the case of each pair 〈i, j〉 that all three fourth members belonging to the triples 〈i, j, 1〉, 〈i, j, 2〉 and 〈i, j, 3〉 in mos are empty lists], and [for each number u of the domain of the first projection of mos, the u-th word of s coincides with the concatenation of the fourth projection of the element of mos of the form 〈u, 1, 1, a〉 with the fourth projections of all the following elements with number u as their first projection, just in the order in mos].
2.3.
If 〈i, j, k, a〉 is an element of mos, we say that a is the 〈i, j, k〉-th morph segment of the given morphological segmentation; we can also say that the triple consisting of the 〈i, j, 1〉-st, 〈i, j, 2〉-nd and 〈i, j, 3〉-rd morph segments, respectively, is the 〈i, j〉-th morph of mos.]

Thus each morpheme is divided into exactly three segments, 〈i, j, 1〉, 〈i, j, 2〉 and 〈i, j, 3〉 (out of which at most two are allowed to be empty). Why? In Semitic languages certain stems are discontinuous units consisting of three segments between which other morphemes are to be inserted. It is allowed in GASG that the cohesion between a morpheme and a particular segment of another morpheme is stronger than the cohesion between the three segments of the latter. In Hungarian, segments of the same morpheme can never be separated from each other. It is useful, however, to be able to refer to a certain segment of a morpheme in cases where another morpheme determines just the segment in question. Segmentation into exactly three parts is proposed as a language universal. Further important numbering techniques are defined below.

[Def3:
3.1. We call an element m of (N²)* a strict (two-dimensional) numbering if (m)1 = 〈1, 1〉, and [if (m)k = 〈i, j〉, then (m)k+1 = 〈i, j+1〉 or 〈i+1, 1〉].
3.2. A two-dimensional numbering m is a homomorphic correspondent of a three-dimensional numbering n if there is a function hom such that for each triple 〈i, j, k〉 (k = 1, 2, 3) occurring in n, hom(〈i, j, k〉) = 〈i, j〉; which can be put as follows: member 〈i, j, k〉 of the three-dimensional numbering is the k-th segment of member 〈i, j〉 of the two-dimensional numbering.]

Despite their great length, Def4-6 are worth commenting on together, because the intricate construction of gasg's described in Def4 can be evaluated through understanding its functioning: the generation (acceptance) of sentences.

[Def4:
4.1.
A sextuple G = 〈A, Phon, B, int, X, R〉 is a generative argument structure grammar (gasg) belonging to the phonological model Phon = 〈Mors, Rel〉 over alphabet A (see def1.4.) if [X is a list of lexical items [def4.3.] whose members are elements of Lex(Term)], and [R is a ranked rule system [def4.4.], also over term set B [def4.2.]].
4.2. B, the set of basic terms, is the sequence of the following sets:
Con(j) = ⋃ Con(j)i, for j = 1, 2, 31, 32, and i = 0, 1, 2, ...: finite sets of constants of arity i,
Icon(j) = ⋃ Icon(j)i, for j = 1, 2, and i = 1, 2: finite sets of interpretable constants of arity i; int can be defined here as a total function int: Icon(j) → Rel,
Numb: a set of numbers that necessarily includes all natural numbers,
VAR0: variables that can substitute for elements of Con(2)0 and Numb,
Rank = {r1, ..., rK} (K = 7).

Footnote 2: Here Hungarian morphs are demonstrated with stable first and third segments but alternating middle ones: al-hat 'sleep-may,' szúr-hat 'prick-may,' kér-het 'ask-can,' űz-het 'chase-can.' Besides this frontness harmony, other morphemes are sensitive to roundness as well.

4.3. A lexical item is a triple li = 〈ownc, frmc, pdrs〉 whose three components are as follows:
1. Set ownc, the own word conditions, is a subset of the following set Form(1) of well-formed formulas:
(a) For an arbitrary p ∈ Icon(1)k, k = 1 or 2, the expression p(t1, ..., tk) ∈ Form(1) where each argument ti is a term, i = 1, 2, ..., k.
(b) Triples of numbers, precisely elements of Numb × Numb × {1, 2, 3}, are terms; and lists of terms are also terms.
(c) Formula p ∨ q is an element of Form(1) if p and q are its elements.
2. Set frmc, the formal conditions, is a subset of the following set Form(2) of well-formed formulas:
(a) For an arbitrary p ∈ Con(2)k, k = 2, 3, ..., the expression p(t1, ..., tk) ∈ Form(2) where argument ti is a term for i = 2, ..., k, but ti ∉ Rank for these values of i, whereas t1 ∈ Rank. We call the formulas defined in this step ranked formulas.
We also say that, out of these ranked formulas, those which are members of set frmc belong to the given lexical item li.
(b) For an arbitrary p ∈ Con(2)k or p ∈ Icon(2)k, k = 1, 2, ..., the expression p(t1, ..., tk) ∈ Form(2) where argument ti is a term for i = 1, 2, ..., k, but ti ∉ Rank for these values of i.
(c) Elements of ⋃ Con(2)i, for i = 0, 1, 2, ..., are terms; elements of ⋃ Icon(2)i, for i = 0, 1, 2, ..., are terms; elements of Numb and VAR0 are terms; lists of terms are also terms; elements of Form(2) which are not ranked formulas are all terms too.
(d) Formula p ∨ q is an element of Form(2) if p and q are its elements.
(e) Formula p ∧ q is an element of Form(2) if p and q are its elements.
3. Set pdrs, the proto-DRS provided by the given lexical item, is a pair 〈bdrs, embc〉 where bdrs (the basic DRS) is a subset of the following set Form(31) of well-formed formulas, and embc (the embedding conditions) is a subset of the set Form(32) of well-formed formulas defined after that:
(a) For an arbitrary p ∈ Con(31)k, the expression p(t1, ..., tk) ∈ Form(31) where each argument ti is a term. If the given formula is an element of subset bdrs, the terms occupying its argument positions are called referents belonging to bdrs.
(b) Elements of {ref} × (Numb ∪ VAR0)³ are terms where ref is a distinguished element of Con(31)3.
(c) The expression p(t1, ..., tk) ∈ Form(32) where argument ti is a term for i = 1, 2, ..., k, and p ∈ {oldref, newref} = Con(32)1 or p ∈ {fixpoint, <, ≤, ≠, ∼} = Con(32)2.
(d) Elements of {ref} × (Numb ∪ VAR0)³ are terms where ref is a distinguished element of Con(32)3, with the restriction that a quadruple ref(i, j, k) can be considered a term here only if it is a referent belonging to set bdrs.
4.4.
The ranked rule system denoted by R is defined as an arbitrary subset of the set rr(Form(2)) of ranked rules over the set Form(2) of formulas (defined in def4.3.2.): a formula of the form p ← q is an element of rr(Form(2)) if p is a ranked formula and [q is a conjunction of elements of Form(2): q = q1 ∧ q2 ∧ ... ∧ qd for some d].]

[Def5:
5.1. An element num of (N² × X)* is called a numeration (over a gasg G) if [the [1, 2]-projection of the list is a strict two-dimensional numbering], and [the members of the third projection are lexical items (coming from the fifth component of G)].
5.2. If 〈i, j, li〉 is an element of num, we can say that the given lexical item li is the 〈i, j〉-th element of the numeration.]

[Def6:
6.1. A sentence s (a member of (A ∪ Y)* in Def1) is grammatical according to a gasg G = 〈A, Phon, B, int, X, R〉 if there is a numeration num of (N² × X)*, there is a (cohesion) function coh: VAR0 → Con(2)0 ∪ Numb (def4.2.!), and sentence s has a morphological segmentation mos of (N³ × A*)* (def2.2.) such that the numbering of numeration num is a homomorphic correspondent of the numbering of segmentation mos and the 〈coh, int〉 pair satisfies [def6.2.] numeration num according to rule system R.
6.2. Pair 〈coh, int〉 satisfies (def6.2.) numeration num according to rule system R if for each possible 〈i, j〉, the lexical item li which is the 〈i, j〉-th member of the numeration is satisfied. This lexical item li = 〈ownc, frmc, pdrs〉 is satisfied if all three of its components are satisfied.
1. Formula set ownc is satisfied if, [in the case of 4.3.1.a., 〈int′(t1), ..., int′(tk)〉 ∈ int(p) ∈ Rel, where (Rel is the set of relations in the phonological model Phon belonging to gasg G, and) function int′ is an extension of int that assigns to a number triple 〈i, j, k〉 the 〈i, j, k〉-th morph segment of the morphological segmentation mos, and to a number pair 〈i, j〉 the 〈i, j〉-th morph of mos]; [in the case of 4.3.1.c., p is satisfied or q is satisfied].
2.
Formula set frmc is satisfied if one of the cases discussed below holds. First of all, however, coh′(p) is to be defined for formulas of Form(2) and Form(3): it is the formula whose only difference relative to p is that each occurrence of a variable v (an element of VAR0) has been replaced with coh(v).
In the case of 4.3.2.a., a ranked formula p(t1, ..., tk) is satisfied if there is a formula p(t′1, ..., t′k) ← q′ in rule system R such that coh′(p(t′1, ..., t′k)) = p(t1, ..., tk), there is a formula q such that coh′(q) = coh′(q′) and q belongs to the 〈i′, j′〉-th lexical item in numeration num for some pair 〈i′, j′〉, and coh′(q′) is satisfied.
In the case of 4.3.2.b., a formula p(t1, ..., tk) is satisfied if EITHER there is a formula p(t′1, ..., t′k) ← q′ in rule system R such that coh′(p(t′1, ..., t′k)) = coh′(p(t1, ..., tk)), there is a formula q such that coh′(q) = coh′(q′) and q belongs to the 〈i′, j′〉-th lexical item in numeration num for some pair 〈i′, j′〉, and coh′(q′) is satisfied (indirect satisfaction), OR coh′(p(t1, ..., tk)) belongs to the 〈i′, j′〉-th lexical item in numeration num for some pair 〈i′, j′〉 (direct satisfaction), OR 〈int′(coh(t1)), ..., int′(coh(tk))〉 ∈ int(p) ∈ Rel, where int′ has been defined in def6.2.1. (direct satisfaction).
In the case of 4.3.2.d., p ∨ q is satisfied if p is satisfied or q is satisfied. In the case of 4.3.2.e., p ∧ q is satisfied if p is satisfied and q is satisfied.
3. Formula sets bdrs and embc are satisfied if each formula p that can be found in one of them is satisfied; such an arbitrary formula p is satisfied without conditions.
6.3. Let sem denote the set consisting of the pairs 〈coh′(bdrs), coh′(embc)〉 for all lexical items in the numeration. We can call it the discourse-semantic representation of sentence s.]

In harmony with our "total lexicalism," the lexical item is the crucial means of a gasg (def4.3.). Its first component out of the three (def4.3.1.)
consists of conditions on the "own word," deciding whether a morpheme in a (potential) sentence can be considered a realization of the given lexical item (see def6.2.1. and the last two footnotes on allomorphs). It is our new proposal [12] that, instead of fully inflected words (located in a multiple inheritance network), li's are assigned to morphemes, realizing a "totally lexicalist morphology." The component of formal conditions (def4.3.2.) is responsible for selecting the other li's with which the li in question can stand in certain grammatical relations (def6.2.2.). It imposes requirements on them and exhibits its own properties to them. As for the range of grammatical relations in a universal perspective [10], there are unidirectional relations, e.g. an adjective "seeks" its noun, where the "seeking" li may show certain properties (number, gender, case, definiteness) of the "sought" one, and bidirectional relations, e.g. an object and its regent (in whose argument structure the former appears) "seek" each other, where the argument may have a case-marking depending on the regent, and the regent may show certain properties (number, person, gender, definiteness) of the argument. The rule system in the sixth component of gasg's (def4.4.), among others, makes it possible to store the above-listed language-specific factors outside li's, so frmc (def4.3.2.) is to contain only references to the relations themselves. It is ranked implication rules (def4.4., def6.2.2.) that we consider peculiar to GASG. In addition to satisfying a requirement described in a li directly, by proving that either some property of another li is appropriate or that the morphemes / words in the segmented sentence stand in a suitable configuration, the requirement in question can be satisfied indirectly, by proving that there is a lexical item which has a competing requirement ranked higher.
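This direct/indirect mechanism can be sketched as follows (our illustration: the words, the integer ranks standing for r1-r7 with 1 the strongest, and the simplified adjacency requirements are all invented):

```python
# req = (rank, w1, w2) encodes the requirement "w1 immediately precedes w2".

def satisfied(req, order, reqs):
    """Direct satisfaction: w1 is adjacent to w2 in the word order.
    Indirect satisfaction: every intervening word x has a competing
    adjacency requirement (involving w1 or w2 and x) of stronger rank."""
    rank, w1, w2 = req
    i, j = order.index(w1), order.index(w2)
    if j == i + 1:
        return True  # direct satisfaction
    return all(
        any(r < rank and {a, b} & {w1, w2} and x in (a, b)
            for (r, a, b) in reqs)
        for x in order[i + 1:j]
    )  # indirect satisfaction

# 'a fekete medve' ('the black bear'): the article's weak (rank 5)
# requirement to precede the noun is overridden by the adjective's
# stronger (rank 2) requirement, so 'fekete' may intervene.
reqs = [(5, "a", "medve"), (2, "fekete", "medve")]
print(satisfied((5, "a", "medve"), ["a", "fekete", "medve"], reqs))       # True (indirect)
print(satisfied((2, "fekete", "medve"), ["a", "fekete", "medve"], reqs))  # True (direct)
print(satisfied((2, "fekete", "medve"), ["fekete", "a", "medve"], reqs))  # False
```

No phrase structure rule is consulted here: acceptable word orders fall out of the competition of ranked lexical requirements alone.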
This optimalistic technique enables us to dispense with phrase structure rules. The essence (precise details in [13, 14]) is that, if word (morpheme) w1 stands in a certain relation with w2, w1 is required to be adjacent to w2. This requirement can be satisfied, of course, by putting them next to each other in a sentence, but we can also have recourse to an indirect way of satisfaction by inserting other words between them whose adjacency requirements (concerning either w1 or w2) are ranked higher (and these intervening words, in a language-specific way, may be allowed to "bring" their dependents). In def4.2. seven ranks are proposed as a universal concerning the complexity of human languages. The discourse-semantic component of li's (def4.3.3.) is intended to check practically nothing (def6.2.3.), but their "sum" coming from the whole numeration (def6.3.) provides a "proto-"DRS in the case of sentences that have proved to be grammatical. Our proto-DRSs seem to have a very simple structure in comparison to DRSs with the multiply embedded box constructions demonstrated in [11]. Nevertheless, they store the same information, due to the conditions of a special status defined in def4.3.3.2. Moreover, several cases of ambiguity can simply be traced back to an underspecified state of these special conditions. Let us consider an illustration of these facilities.

(1) Most widowers court a blonde.

(2) most(e0; e1, e2)    fixpoint(e0), e0 < e1, e1 < e2, newref(e0)
    widower(e1; r2)     newref(e1), newref(e2)
    court(e2; r2, r3)   newref(r2), e1 ≈ r2
    blonde(r3)          newref(r3), r3 ≈ ???

(3) e2 ≈ r3: 'It is often true that if someone is a widower, he courts a blonde.'
    e0 ≈ r3: 'There is a blonde whom most widowers court.'

The basic proposition (whose eventuality referent is e0) is that a situation [e1: somebody is a widower] often implies another situation [e2: he courts somebody]; the symbol '<' indicates that these situations are not facts but belong, together with some of their characters, to fictive worlds [15].
The widower necessarily belongs to the fictive world of our thinking about an abstract situation (e1 ≈ r2). But which world does the blonde belong to? Referent r3 is looking for its place... And it can find its place in different worlds (3), without assuming different syntactic structures behind the two readings. Let us finish the section with the definition of the language generated by a gasg:

[Def7: In the circumstances defined above in def6, we can say that gasg G generates sentence s through segmentation mos and numeration num, and that G assigns the given sentence DRS sem as its discourse-semantic representation. It can also be said in this situation that gasg G has generated the reading 〈s, mos, num, sem〉. L(G) ⊂ (A ∪ Y)* is called the language defined by gasg G if L(G) consists of the sentences generated by G.]

Footnote 3: Semantic restrictions (e.g. on the [+human] status of an argument) can be put in the set of formal conditions (def4.3.2.) among the morphological and syntactic ones.

Footnote 4: The freedom in finding the appropriate world has language-dependent restrictions, depending also on the argument status and other grammatical relations of the li of the indefinite article in question, of course.

3 Implementation in Prolog

Our work is under continuous development; the version available now can parse simple (non-compound) neutral Hungarian sentences. In our parser we insist on the theoretically clear principles of GASG, but naturally we have had to make some technical changes according to the special features of programming in Prolog. Hence, parts of the lexical items of GASG are stored in different places in the program. The database section contains the lexical items, which are morphemes and consist of the own word, phonological features and some inherent syntactic conditions (e.g. the argument structure). Other environmental conditions and properties of morphemes that a lexical item searches for are put down in the synrelations predicate.
This part performs the syntactic parsing proper, together with a word-order check based on the immprec predicate. The third part of a GASG lexical item, its semantics, is represented in the semantics predicates. The parsing starts with the main predicate gramm, which, after a successful phonological and morphosyntactic parsing, returns the semantic representation formulated as a DRS.
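The pipeline behind gramm can be sketched as follows (a Python paraphrase of the architecture just described, not the actual Prolog program; the toy lexicon, its field names and the two-word example are invented):

```python
def segment(sentence, lexicon):
    """Morphological segmentation, here naively: one lexical item per word."""
    items = []
    for w in sentence.rstrip(".").split("#"):
        if w not in lexicon:
            return None
        items.append(lexicon[w])
    return items

def synrelations(li, num):
    """Check the formal (environmental) conditions of li against the
    whole numeration num."""
    return all(cond(num) for cond in li["frmc"])

def gramm(sentence, lexicon):
    """Main predicate: segmentation, then condition checking, then the
    union of the proto-DRSs as the discourse-semantic representation."""
    num = segment(sentence, lexicon)
    if num is None or not all(synrelations(li, num) for li in num):
        return None
    return [c for li in num for c in li["pdrs"]]

# 'medve alszik' ('a bear sleeps'); the verb requires a nominative
# argument somewhere in the numeration.
lexicon = {
    "medve": {"frmc": [], "pdrs": ["bear(r1)"], "case": "nom"},
    "alszik": {
        "frmc": [lambda num: any(li.get("case") == "nom" for li in num)],
        "pdrs": ["sleep(e1, r1)"],
    },
}
print(gramm("medve#alszik.", lexicon))  # ['bear(r1)', 'sleep(e1, r1)']
print(gramm("alszik.", lexicon))        # None: no nominative argument found
```

In the real parser the segmentation step is of course a genuine search over morpheme segments, and the conditions are Prolog clauses resolved by unification rather than Python closures.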